
    Experimental and Analytical Techniques for Evaluating the Impact of Thermal Barrier Coatings on Low Temperature Combustion

    Homogeneous Charge Compression Ignition (HCCI) exhibits many fundamentally attractive thermodynamic characteristics. These traits, along with lean charge and low combustion temperatures, generally act to increase thermal efficiency relative to competing spark- and compression-ignition strategies. However, HCCI's extreme sensitivity to in-cylinder thermal conditions places limits on practical implementation: at low temperatures, combustion remains incomplete, limiting cycle efficiency while increasing emissions. Meanwhile, the introduction of thermal barrier coatings (TBCs) to in-cylinder surfaces has been shown to fundamentally alter gas-wall interactions. The work in this dissertation seeks to identify, and eventually quantify, HCCI/TBC synergies. Both experimental and analytical pathways are pursued, attempting to illuminate the impact(s) of coatings on engine heat transfer and fundamental HCCI combustion metrics. Efforts to correlate TBC thermophysical properties and surface phenomena with HCCI performance and emissions are also explored. Finally, methods are proposed to evaluate the TBC-gas interaction as it relates to thermal stratification of the in-cylinder charge. Analysis is enabled through complementary analytical and experimental pathways, which include specialized solution methodology and experimental hardware. Combined, these tools enable a more complete qualitative assessment of thermal barrier coatings' impact on engine performance and emissions metrics, heat loss at the wall, and ultimately thermal stratification of the in-cylinder temperature field.

    SOS Is Not Obviously Automatizable, Even Approximately

    Suppose we want to minimize a polynomial p(x) = p(x_1,...,x_n), subject to some polynomial constraints q_1(x),...,q_m(x) >= 0, using the Sum-of-Squares (SOS) SDP hierarchy. Assume we are in the "explicitly bounded" ("Archimedean") case where the constraints include x_i^2 <= 1 for all 1 <= i <= n. It is often stated that the degree-d version of the SOS hierarchy can be solved, to high accuracy, in time n^{O(d)}. Indeed, I myself have stated this in several previous works. The point of this note is to state (or remind the reader) that this is not obviously true. The difficulty comes not from the "r" in the Ellipsoid Algorithm, but from the "R"; a priori, we only know an exponential upper bound on the number of bits needed to write down the SOS solution. An explicit example is given of a degree-2 SOS program illustrating the difficulty.
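
    To make the object under discussion concrete, the following is a minimal sketch of a degree-2 moment/SOS relaxation in the explicitly bounded setting. It assumes the cvxpy package with its bundled SDP solver, and uses the arbitrary illustrative polynomial p(x) = x_1 x_2; it is not the paper's example, only the flavor of program whose bit-complexity is at issue.

```python
# Minimal sketch: degree-2 SOS/moment relaxation of
#   minimize p(x) = x1*x2  subject to  x_i^2 <= 1,
# assuming cvxpy with its default SDP solver.
import cvxpy as cp

n = 2
# Moment matrix indexed by the monomials [1, x1, x2].
M = cp.Variable((n + 1, n + 1), symmetric=True)
constraints = [M >> 0, M[0, 0] == 1]
# The Archimedean constraints x_i^2 <= 1 become pseudo-expectation bounds on x_i^2.
constraints += [M[i, i] <= 1 for i in range(1, n + 1)]
# The pseudo-expectation of p(x) = x1*x2 is the (1, 2) entry of the moment matrix.
prob = cp.Problem(cp.Minimize(M[1, 2]), constraints)
prob.solve()
print("degree-2 SOS lower bound:", prob.value)   # ~ -1 (the true minimum)
```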

    Genetic diversity and genetic structuring at multiple spatial scales across the range of the Northern Leopard Frog, Rana pipiens

    Despite a thorough understanding of the proximate mechanisms that drive genetic diversity, we are still very poor at predicting the genetic diversity of natural populations. Understanding patterns of genetic diversity is important for many reasons, including predicting species' adaptation to climate change and predicting the spread of invasive species, but it is particularly important for species that are declining. This dissertation attempts to explain patterns in genetic diversity at multiple spatial scales across the range of the Northern Leopard Frog, Rana pipiens, which is declining across large portions of its range. Genetic diversity is often lower in edge populations than in central populations. Genetic diversity may be reduced in edge populations per se, or populations that occur at the edge of the species' range may have low diversity because they have recently expanded into new habitat and thus show signs of founder effects. In Chapter 2, we tested several alternative hypotheses to explain genetic diversity across the species' range, and to explain why some edge populations may not show reduced genetic diversity. We found that genetic diversity was reduced in edge populations relative to central populations, but was not reduced in populations in previously glaciated areas relative to previously unglaciated areas; therefore position at range edge had a stronger effect in reducing diversity than recent colonization of new habitat. We found that genetic diversity declined linearly towards the range edge in one of two transects from range center to range edge. We concluded that genetic diversity in this species is generally reduced by position at the range edge, but that this effect may differ among edges. In Chapter 3, we tested the hypothesis that eastern and western populations were genetically distinct. We found two distinct clades that introgress in some markers but are distinct and defined by narrow boundaries in the eastern Great Lakes region in others. We concluded that genetic diversity in the Mississippi River region was elevated by the introgression of descendants from two Pleistocene refugia. In Chapter 4, we analyzed genetic diversity within populations throughout Arizona to assess potential source populations for reintroductions. We also analyzed mitochondrial DNA to determine whether any populations contained genetic material not native to the region. Populations in one area had high genetic diversity and high gene flow among populations, but also contained evidence of introduction of eastern frogs. We conclude that supplementing genetic diversity in other populations with translocations from this area is not recommended.

    Quantum Automata Cannot Detect Biased Coins, Even in the Limit

    Aaronson and Drucker (2011) asked whether there exists a quantum finite automaton that can distinguish fair coin tosses from biased ones by spending significantly more time in accepting states, on average, given an infinite sequence of tosses. We answer this question negatively

    Sherali-Adams Strikes Back

    Let G be any n-vertex graph whose random walk matrix has its nontrivial eigenvalues bounded in magnitude by 1/sqrt{Delta} (for example, a random graph G of average degree Theta(Delta) typically has this property). We show that the exp(c (log n)/(log Delta))-round Sherali-Adams linear programming hierarchy certifies that the maximum cut in such a G is at most 50.1% (in fact, at most 1/2 + 2^{-Omega(c)}). For example, in random graphs with n^{1.01} edges, O(1) rounds suffice; in random graphs with n * polylog(n) edges, n^{O(1/log log n)} = n^{o(1)} rounds suffice. Our results stand in contrast to the conventional beliefs that linear programming hierarchies perform poorly for max-cut and other CSPs, and that eigenvalue/SDP methods are needed for effective refutation. Indeed, our results imply that constant-round Sherali-Adams can strongly refute random Boolean k-CSP instances with n^{ceil(k/2) + delta} constraints; previously this had only been done with spectral algorithms or the SOS SDP hierarchy.
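
    As a quick numerical illustration of the eigenvalue hypothesis, the sketch below (assuming numpy, with arbitrary illustrative parameters; not part of the paper) samples a G(n, p) graph of average degree roughly Delta and checks that the nontrivial eigenvalues of its random walk matrix are a small constant times 1/sqrt{Delta}.

```python
# Sketch: nontrivial eigenvalues of the random-walk matrix of a sparse
# random graph are typically O(1/sqrt(Delta)) in magnitude.
import numpy as np

rng = np.random.default_rng(0)
n, Delta = 2000, 50
p = Delta / n

A = np.triu(rng.random((n, n)) < p, k=1)
A = (A + A.T).astype(float)                # G(n, p) adjacency matrix
deg = np.maximum(A.sum(axis=1), 1.0)       # guard against isolated vertices
W = A / deg[:, None]                       # random-walk matrix D^{-1} A

eigs = np.sort(np.linalg.eigvals(W).real)  # eigenvalues are real up to numerical noise
largest_nontrivial = np.abs(eigs[:-1]).max()   # drop the trivial eigenvalue 1
print(largest_nontrivial, "vs 1/sqrt(Delta) =", 1 / np.sqrt(Delta))
```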

    Quantum Approximate Counting with Nonadaptive Grover Iterations

    Approximate Counting refers to the problem where we are given query access to a function f : [N] -> {0,1}, and we wish to estimate K = #{x : f(x) = 1} to within a factor of 1+epsilon (with high probability), while minimizing the number of queries. In the quantum setting, Approximate Counting can be done with O(min(sqrt{N/epsilon}, sqrt{N/K}/epsilon)) queries. It has recently been shown that this can be achieved by a simple algorithm that only uses "Grover iterations"; however the algorithm performs these iterations adaptively. Motivated by concerns of computational simplicity, we consider algorithms that use Grover iterations with limited adaptivity. We show that algorithms using only nonadaptive Grover iterations can achieve O(sqrt{N/epsilon}) query complexity, which is tight.
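
    For intuition about what a nonadaptive-Grover algorithm gets to observe, here is a minimal sketch (assuming numpy; illustrative only, not the paper's algorithm) of the measurement statistics after r Grover iterations, which depend on K only through sin^2((2r+1) theta) with sin(theta) = sqrt{K/N}.

```python
# Sketch: statistics available to a nonadaptive-Grover counting algorithm.
# After r Grover iterations, a measurement yields a marked item with
# probability sin^2((2r+1)*theta), where sin(theta) = sqrt(K/N).
import numpy as np

rng = np.random.default_rng(1)
N, K = 1 << 20, 937                        # hypothetical instance
theta = np.arcsin(np.sqrt(K / N))

def sample_marked(r, shots):
    """Simulate `shots` measurements taken after r Grover iterations."""
    return rng.binomial(shots, np.sin((2 * r + 1) * theta) ** 2)

# Nonadaptive schedule: all Grover depths are fixed before any measurement.
depths = [2 ** j for j in range(10)]
counts = {r: sample_marked(r, shots=200) for r in depths}
print(counts)   # classical post-processing would estimate K from these counts
```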

    On Closeness to k-Wise Uniformity

    A probability distribution over {-1, 1}^n is (epsilon, k)-wise uniform if, roughly, it is epsilon-close to the uniform distribution when restricted to any k coordinates. We consider the problem of how far an (epsilon, k)-wise uniform distribution can be from any globally k-wise uniform distribution. We show that every (epsilon, k)-wise uniform distribution is O(n^{k/2}epsilon)-close to a k-wise uniform distribution in total variation distance. In addition, we show that this bound is optimal for all even k: we find an (epsilon, k)-wise uniform distribution that is Omega(n^{k/2}epsilon)-far from any k-wise uniform distribution in total variation distance. For k=1, we get a better upper bound of O(epsilon), which is also optimal. One application of our closeness result is to the sample complexity of testing whether a distribution is k-wise uniform or delta-far from k-wise uniform. We give an upper bound of O(n^{k}/delta^2) (or O(log n/delta^2) when k = 1) on the required samples. We show an improved upper bound of O~(n^{k/2}/delta^2) for the special case of testing fully uniform vs. delta-far from k-wise uniform. Finally, we complement this with a matching lower bound of Omega(n/delta^2) when k = 2. Our results improve upon the best known bounds from [Alon et al., 2007], and have simpler proofs
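
    The quantity being controlled here is, roughly, the size of the low-degree biases E[prod_{i in S} x_i] for nonempty |S| <= k, which vanish exactly under k-wise uniformity. The following minimal sketch (assuming numpy; an illustrative toy, not the testing algorithm from the paper) estimates these biases from samples.

```python
# Sketch: empirically estimate the degree-<=k biases of a distribution over
# {-1, 1}^n; a k-wise uniform distribution has all of these equal to 0.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n, k, samples = 8, 2, 20000

# Toy distribution: uniform bits, except the last bit copies the first, so the
# pair {0, n-1} has bias 1 and the distribution is far from 2-wise uniform.
X = rng.choice([-1, 1], size=(samples, n))
X[:, -1] = X[:, 0]

biases = {
    S: float(np.prod(X[:, list(S)], axis=1).mean())
    for r in range(1, k + 1)
    for S in combinations(range(n), r)
}
worst = max(biases, key=lambda S: abs(biases[S]))
print(worst, biases[worst])    # expect the pair (0, 7) with bias ~ 1.0
```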

    Pauli Error Estimation via Population Recovery

    Motivated by estimation of quantum noise models, we study the problem of learning a Pauli channel, or more generally the Pauli error rates of an arbitrary channel. By employing a novel reduction to the "Population Recovery" problem, we give an extremely simple algorithm that learns the Pauli error rates of an n-qubit channel to precision epsilon in l_infinity distance using just O((1/epsilon^2) log(n/epsilon)) applications of the channel. This is optimal up to the logarithmic factors. Our algorithm uses only unentangled state preparation and measurements, and the post-measurement classical runtime is just an O(1/epsilon) factor larger than the measurement data size. It is also impervious to a limited model of measurement noise where heralded measurement failures occur independently with probability at most 1/4. We then consider the case where the noise channel is close to the identity, meaning that the no-error outcome occurs with probability 1 - eta. In the regime of small eta we extend our algorithm to achieve multiplicative precision 1 ± epsilon (i.e., additive precision epsilon*eta) using just O((1/(epsilon^2 eta)) log(n/epsilon)) applications of the channel.
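
    The basic linear-algebraic relation exploited by any such estimator can be seen already in the single-qubit case. The following is a minimal sketch (assuming numpy; a classical simulation with hypothetical error rates, not the paper's population-recovery algorithm) of recovering Pauli error rates from estimates of the Pauli eigenvalues lambda_a = sum_b (-1)^{<a,b>} p_b.

```python
# Sketch (single qubit): Pauli error rates p_b and Pauli eigenvalues lambda_a
# are related by lambda_a = sum_b (-1)^{<a,b>} p_b, a Walsh-Hadamard-type
# transform; estimating the eigenvalues and inverting recovers the rates.
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.90, 0.04, 0.03, 0.03])   # hypothetical rates for (I, X, Y, Z)

# sign[a, b] = +1 if Paulis a and b commute, -1 otherwise (order: I, X, Y, Z).
sign = np.array([[1,  1,  1,  1],
                 [1,  1, -1, -1],
                 [1, -1,  1, -1],
                 [1, -1, -1,  1]], dtype=float)
lam = sign @ p                            # true Pauli eigenvalues

# Simulate noisy eigenvalue estimates (e.g. from prepare-and-measure shots),
# then invert the transform to recover the error rates.
shots = 100_000
lam_hat = 2 * rng.binomial(shots, (1 + lam) / 2) / shots - 1
p_hat = np.linalg.solve(sign, lam_hat)
print(np.round(p_hat, 3))                 # close to the true rates p
```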

    One Time-traveling Bit is as Good as Logarithmically Many

    We consider computation in the presence of closed timelike curves (CTCs), as proposed by Deutsch. We focus on the case in which the CTCs carry classical bits (as opposed to qubits). Previously, Aaronson and Watrous showed that computation with polynomially many CTC bits is equivalent in power to PSPACE. On the other hand, Say and Yakaryilmaz showed that computation with just 1 classical CTC bit gives the power of "postselection", thereby upgrading classical randomized computation (BPP) to the complexity class BPP_path and standard quantum computation (BQP) to the complexity class PP. It is natural to ask whether increasing the number of CTC bits from 1 to 2 (or 3, 4, etc.) leads to increased computational power. We show that the answer is no: randomized computation with logarithmically many CTC bits (i.e., polynomially many CTC states) is equivalent to BPP_path. (Similarly, quantum computation augmented with logarithmically many classical CTC bits is equivalent to PP.) Spoilsports with no interest in time travel may view our results as concerning the robustness of the class BPP_path and the computational complexity of sampling from an implicitly defined Markov chain
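
    Deutsch's model, restricted to a single classical CTC bit, amounts to a self-consistency computation on a 2-state Markov chain. The sketch below (assuming numpy, with an arbitrary illustrative transition matrix; not a construction from the paper) finds the stationary distribution that nature is assumed to assign to the time-traveling bit.

```python
# Sketch: Deutsch consistency for one classical CTC bit. The circuit induces
# a column-stochastic 2x2 matrix T on the bit; nature sets the bit's
# distribution to a stationary distribution pi satisfying T @ pi = pi.
import numpy as np

# Hypothetical induced transition matrix: column j is the distribution of the
# bit coming out of the circuit, given that bit j went in.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

vals, vecs = np.linalg.eig(T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])   # eigenvector for eigenvalue 1
pi = pi / pi.sum()
print(pi)   # self-consistent distribution of the time-traveling bit: [2/3, 1/3]
```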